False sense


Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack

Neural Information Processing Systems

Deep neural networks face persistent challenges in defending against backdoor attacks, leading to an ongoing battle between attacks and defenses. While existing backdoor defense strategies have shown promising performance in reducing attack success rates, can we confidently claim that the backdoor threat has truly been eliminated from the model? To address this question, we re-investigate the characteristics of backdoored models after defense (denoted as defense models). Surprisingly, we find that the original backdoors still exist in defense models derived from existing post-training defense strategies, where backdoor existence is measured by a novel metric we call the backdoor existence coefficient. This implies that the backdoors merely lie dormant rather than being eliminated. To further verify this finding, we empirically show that these dormant backdoors can easily be re-activated at inference time by manipulating the original trigger with a well-designed tiny perturbation found via a universal adversarial attack.
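The abstract does not spell out the attack procedure, so the following is only a minimal sketch of the re-activation idea: given a defense model, a batch of images already stamped with the original trigger, and the attacker's target class, search for one tiny universal perturbation of the trigger that restores the backdoor behavior. It assumes a PyTorch image classifier; the function name, arguments, and hyperparameters are illustrative, not the paper's implementation.

```python
import torch

def reactivate_trigger(model, triggered_images, target_class,
                       eps=8 / 255, steps=100, lr=1e-2):
    """Sketch: find one small universal perturbation `delta` that, added
    on top of the original (defended-against) trigger, re-activates a
    dormant backdoor."""
    model.eval()
    delta = torch.zeros_like(triggered_images[0]).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    target = torch.full((triggered_images.size(0),), target_class,
                        dtype=torch.long)
    for _ in range(steps):
        # Push every perturbed, triggered input toward the attacker's target
        logits = model((triggered_images + delta).clamp(0, 1))
        loss = loss_fn(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation tiny (L_inf ball)
    return delta.detach()
```

On this view, a low attack success rate after defense only shows that the original trigger no longer works verbatim; the optimized `delta` probes whether a nearby trigger still does.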


Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy

Hayes, Jamie, Shumailov, Ilia, Triantafillou, Eleni, Khalifa, Amr, Papernot, Nicolas

arXiv.org Artificial Intelligence

The high cost of model training makes it increasingly desirable to develop techniques for unlearning. These techniques seek to remove the influence of a training example without having to retrain the model from scratch. Intuitively, once a model has unlearned, an adversary that interacts with the model should no longer be able to tell whether the unlearned example was included in the model's training set or not. In the privacy literature, this is known as membership inference. In this work, we discuss adaptations of Membership Inference Attacks (MIAs) to the setting of unlearning (leading to their "U-MIA" counterparts). We propose a categorization of existing U-MIAs into "population U-MIAs", where the same attacker is instantiated for all examples, and "per-example U-MIAs", where a dedicated attacker is instantiated for each example. We show that the latter category, wherein the attacker tailors its membership prediction to each example under attack, is significantly stronger. Indeed, our results show that the commonly used U-MIAs in the unlearning literature overestimate the privacy protection afforded by existing unlearning techniques on both vision and language models. Our investigation reveals a large variance in the vulnerability of different examples to per-example U-MIAs. In fact, several unlearning algorithms lead to a reduced vulnerability for some, but not all, examples that we wish to unlearn, at the expense of increasing it for other examples. Notably, we find that the privacy protection for the remaining training examples may worsen as a consequence of unlearning. We also discuss the fundamental difficulty of equally protecting all examples using existing unlearning schemes, due to the different rates at which examples are unlearned. We demonstrate that naive attempts at tailoring unlearning stopping criteria to different examples fail to alleviate these issues.
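To make the population vs. per-example distinction concrete, here is a minimal sketch of a per-example U-MIA in the style of likelihood-ratio membership attacks. It assumes you have already collected, for one target example, losses from shadow models that were trained with the example and then unlearned it, and losses from shadow models trained without it; all names are hypothetical and this is not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def per_example_umia_score(loss_unlearned, losses_in, losses_out):
    """Likelihood-ratio test tailored to a single example.

    loss_unlearned: the unlearned model's loss on the target example
    losses_in:  shadow-model losses where the example was trained on,
                then put through the unlearning procedure
    losses_out: shadow-model losses where the example was never seen
    """
    # Fit one Gaussian per hypothesis, specific to this one example
    mu_in, sd_in = np.mean(losses_in), np.std(losses_in) + 1e-8
    mu_out, sd_out = np.mean(losses_out), np.std(losses_out) + 1e-8
    # Ratio > 1: the example still looks like a (supposedly forgotten) member
    return (norm.pdf(loss_unlearned, mu_in, sd_in) /
            norm.pdf(loss_unlearned, mu_out, sd_out))
```

A population U-MIA would instead fit a single decision rule shared across all examples; fitting the two loss distributions separately for each target is exactly what makes the per-example attack stronger.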


Can Directed Graph Neural Networks be Adversarially Robust?

Hou, Zhichao, Zhang, Xitong, Wang, Wei, Aggarwal, Charu C., Liu, Xiaorui

arXiv.org Artificial Intelligence

The existing research on robust Graph Neural Networks (GNNs) fails to acknowledge the significance of directed graphs in providing rich information about networks' inherent structure. This work presents the first investigation into the robustness of GNNs in the context of directed graphs, aiming to harness the profound trust implications offered by directed graphs to bolster the robustness and resilience of GNNs. Our study reveals that existing directed GNNs are not adversarially robust. In pursuit of our goal, we introduce a new and realistic directed graph attack setting and propose an innovative, universal, and efficient message-passing framework as a plug-in layer to significantly enhance the robustness of GNNs. Combined with existing defense strategies, this framework achieves outstanding clean accuracy and state-of-the-art robust performance, offering superior defense against both transfer and adaptive attacks. The findings in this study reveal a novel and promising direction for this crucial research area. The code will be made publicly available upon the acceptance of this work.
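The abstract leaves the plug-in layer unspecified here, so the following is only an illustrative sketch of how such robust message-passing layers are often built: replace mean aggregation over in-neighbors with a robust statistic such as the elementwise median, so that a handful of adversarially inserted directed edges cannot dominate a node's update. Plain PyTorch; the class name and interface are assumptions, not the paper's framework.

```python
import torch

class RobustDirectedAggregation(torch.nn.Module):
    """Sketch of a robust message-passing layer for directed graphs:
    aggregate in-neighbor features with an elementwise median."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (2, E) directed src -> dst
        src, dst = edge_index
        out = torch.empty_like(x)
        for v in range(x.size(0)):          # naive loop, fine for a sketch
            nbrs = src[dst == v]            # in-neighbors of node v
            if nbrs.numel() > 0:
                # Median resists a few poisoned neighbors better than a mean
                out[v] = x[nbrs].median(dim=0).values
            else:
                out[v] = x[v]               # isolated node: keep own feature
        return torch.relu(self.lin(out))
```

Usage would look like `h = RobustDirectedAggregation(16, 32)(x, edge_index)` with `x` of shape `(N, 16)`; in practice such a layer would be dropped into an existing directed GNN in place of its standard aggregation.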


The Problem With Weather Apps

The Atlantic - Technology

Technologically speaking, we live in a time of plenty. Today, I can ask a chatbot to render The Canterbury Tales as if written by Taylor Swift or to help me write a factually inaccurate autobiography. With three swipes, I can summon almost everyone listed in my phone and see their confused faces via an impromptu video chat. My life is a gluttonous smorgasbord of information, and I am on the all-you-can-eat plan. But there is one specific corner where technological advances haven't kept up: weather apps.


Is Explainable AI Helpful or Harmful?

#artificialintelligence

"Explainable AI (XAI) is a set of methods aimed at making increasingly complex Machine Learning (ML) models understandable by humans". That's how I defined XAI in a previous post where I argued that XAI is both important and extremely difficult to automate. In a nutshell, XAI is crucial for building trust and understanding with (often non-technical) end-users. This empowers the user to actively use and adapt the system. The goal is to create ML-systems with maximal benefits and minimal accidental misuse.


Face masks can foster a false sense of security

The Japan Times

What's happening in Japan is written all over our faces -- our blank, expressionless, masked faces. Never before, it seems safe to say, have so many people gone about masked. Thus we confront the microbes that assault us. "As self-protection, your mask is practically useless," says Shukan Gendai magazine this month. Commercial face masks, medical authorities say, can block particles measuring 3 to 5 micrometers.


I Brake for Autonomous Vehicle Braking – AGL (Above Ground Level)

#artificialintelligence

Trust me, I have no intention of trusting autonomous vehicle braking. One of the terms we see pop up in almost every technical vector is autonomous vehicles. As with 5G, the autonomous vehicle landscape is fraught with hype. That has even spilled over to the consumer marketing arena with tons of ads for automobiles showing hands-off braking, lane navigation, self-parking, and more. Depending upon with whom one speaks, autonomous vehicles are anywhere from level 3 to level 5. Of course, the only one who believes we are at level 5 is Elon Musk, with his claims for Teslas.


False sense of security: why it's time to get real

#artificialintelligence

Artificial intelligence is no longer the stuff of science fiction films. It's already here, driving a Fourth Industrial Revolution which promises to radically reshape the world and society we live in. The changes to the way we live and work are more disruptive than anything since the invention of the wheel – OK, let's be more realistic, the industrial revolution. But let us focus today on cybersecurity as a fundamental part of digital transformation. Much has been made of AI's application in cybersecurity and threat detection -- and it's certainly helping us a lot here at NTT Security.


AI needs to earn our trust, just like any human relationship

#artificialintelligence

It's awkward to correct a stranger when they're wrong. How did you feel when Miguel from IT, whom you've only met once, told you that "learnings" wasn't a real word? Or when Robin lectured you about the correct pronunciation of "macaron" at the office holiday party? Direct feedback that teaches rather than chides requires trust, and trust takes time. Unlike humans, AI craves to know when it's wrong.


Why Facebook Will Never Fully Solve Its Problems with AI

#artificialintelligence

But it will never fully work. "Proposing AI as the solution leaves a very long time period where the issue is not being addressed, during which Facebook's answer to what is being done is, 'We are working on it,'" Georgia Tech AI researcher Mark Riedl told BuzzFeed News. The algorithm hasn't been trained on enough contextual data. The AI isn't good enough, or maybe there aren't enough Burmese-speaking content moderators -- but don't worry, the tools are being worked on. AI automation also gives the company deniability: If it makes a mistake, there's no holding the software accountable.